Conditional gradient type methods for composite nonlinear and stochastic optimization

Author

  • Saeed Ghadimi
Abstract

In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems in which the objective function consists of a (weakly) smooth term and a strongly convex term. While including this strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its rate of convergence for solving strongly convex problems, the per-iteration cost remains lower than that of general proximal type algorithms. Moreover, we show that the presence of this strongly convex term enables us to establish convergence properties of the CGT method even when the (weakly) smooth term is possibly nonconvex. In particular, we present a unified analysis of the CGT method: it achieves the best known convergence rate when the weakly smooth term is nonconvex, and it possesses (nearly) optimal complexity if that term turns out to be convex (and hence the problem is strongly convex). While implementing the CGT method requires explicit estimates of problem parameters, such as the level of smoothness of the weakly smooth term, we also present a few variants of this method that relax such estimation. Unlike general proximal type parameter-free methods, these variants of the CGT method do not require any additional effort for computing (sub)gradients of the objective function or solving extra subproblems in each iteration. We then generalize these methods to the stochastic setting and present several new complexity results. To the best of our knowledge, this is the first time that such complexity results have been presented for solving both weakly smooth nonconvex and strongly convex stochastic optimization problems.

Keywords: iteration complexity, nonconvex optimization, strongly convex optimization, conditional gradient type methods, unified methods, weakly smooth functions
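The CGT method builds on the classical conditional gradient (Frank-Wolfe) scheme, whose per-iteration work is a call to a linear optimization oracle over the feasible set. The sketch below illustrates only that classical CG step on the probability simplex, for a toy quadratic objective; the problem, function names, and stepsize policy are illustrative choices, not the paper's method (which further adds the strongly convex term to the subproblem).

```python
# Minimal sketch of the classical conditional gradient (Frank-Wolfe) step
# over the probability simplex. Illustrative only: the CGT method of the
# paper augments the subproblem with a strongly convex term.

def frank_wolfe(grad, dim, iters):
    x = [1.0 / dim] * dim  # start at the simplex barycenter
    for k in range(iters):
        g = grad(x)
        # Linear optimization oracle over the simplex: the minimizer of a
        # linear function over the simplex is a vertex, i.e. a basis vector.
        i = min(range(dim), key=lambda j: g[j])
        gamma = 2.0 / (k + 2)  # standard open-loop stepsize
        x = [(1 - gamma) * xj for xj in x]  # convex combination keeps x feasible
        x[i] += gamma
    return x

# Toy problem: minimize f(x) = 0.5 * ||x - b||^2 with b inside the simplex,
# so the optimum is x* = b and f(x*) = 0.
b = [0.2, 0.5, 0.3]
grad = lambda x: [x[j] - b[j] for j in range(3)]
x = frank_wolfe(grad, 3, 1000)
fval = 0.5 * sum((x[j] - b[j]) ** 2 for j in range(3))
```

After 1000 iterations the standard O(1/k) CG guarantee makes the optimality gap small; the iterate also stays feasible throughout because each update is a convex combination of feasible points.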


Similar articles

Accelerated gradient methods for nonconvex nonlinear and stochastic programming

In this paper, we generalize the well-known Nesterov’s accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using ...


A Linearly Convergent Conditional Gradient Algorithm with Applications to Online and Stochastic Optimization

Linear optimization is often algorithmically simpler than non-linear convex optimization. Linear optimization over matroid polytopes, matching polytopes, and path polytopes are examples of problems for which we have simple and efficient combinatorial algorithms, but whose non-linear convex counterparts are harder and admit significantly less efficient algorithms. This motivates the computation...


Modeling Stock Return Volatility Using Symmetric and Asymmetric Nonlinear State Space Models: Case of Tehran Stock Market

Volatility is a measure of uncertainty that plays a central role in financial theory, risk management, and asset pricing. Volatility is the conditional variance of changes in asset prices; it is not directly observable and is treated as a hidden variable that is calculated indirectly using approximations. To do this, two general approaches are presented in the literature of financial ...


A Smoothing Stochastic Gradient Method for Composite Optimization

We consider the unconstrained optimization problem whose objective function is composed of a smooth and a non-smooth component, where the smooth component is the expectation of a random function. This type of problem arises in some interesting applications in machine learning. We propose a stochastic gradient descent algorithm for this class of optimization problems. When the non-smooth component h...


Conditional Gradient Sliding for Convex Optimization

In this paper, we present a new conditional gradient type method for convex optimization by utilizing a linear optimization (LO) oracle to minimize a series of linear functions over the feasible set. Different from the classic conditional gradient method, the conditional gradient sliding (CGS) algorithm developed herein can skip the computation of gradients from time to time, and as a result, c...



Journal:

Volume   Issue

Pages  -

Publication date: 2016